About half the people here are software developers, and many are mathematicians as well. I’ve also seen the intellectual work some of them produce (which is what you declared we should evaluate people on), and it is orders of magnitude more impressive than what we have seen from you.
There are more people who have done more impressive work who disagree with AI risk. If I were only going to judge AI risk based on who advocates for it, then worrying about AI risk would clearly be mistaken.
I guess the point was that if we are going to consider “software developer output” as even weak evidence in this debate, why consider only Eliezer’s output, and not the best output of the people who agree with him?
An analogy: Imagine that there is a mathematical problem. A twelve-year-old child solves the problem and says “x = 10”. Then a university professor of mathematics looks at the problem and the solution and says “indeed, you are right”. Taking this story as a whole, would you judge the “x = 10” hypothesis by the credentials of the child, or of the professor? Further, imagine that another university professor of mathematics looks at the problem and says “actually, this is wrong, x = 12”, and then the two professors start a long discussion about whether “x = 10” or “x = 12”. Again, would you frame this debate as a “child versus professor” debate or as a “professor versus professor” debate?
The point is, the argument “I am an impressive software developer and I say EY is wrong, and EY is not an impressive software developer” is weakened by the reply “well, there are other impressive software developers who say EY is right”.
There are more people who have done more impressive work who disagree with AI risk. If I were only going to judge AI risk based on who advocates for it, then worrying about AI risk would clearly be mistaken.
It probably depends on your values. Most people are more worried about being hit by a car than about being eaten by a superintelligent machine. Given common values, their belief about which issue is more important is entirely justified.